Merge develop to s3-develop to pick up riak client changes #1282

Open
JeetKunDoug wants to merge 50 commits into s3-develop from develop
Conversation

JeetKunDoug
Contributor

No description provided.

Brett Hazen and others added 30 commits February 6, 2017 11:54
# Conflicts:
#	tests/ts_cluster_group_by_SUITE.erl
Tweak devrel scripts to work with different versions of git
Fix test failures caused by riak client not allowing list keys
REL-69: Move 2.2 YZ tests to yokozuna repo
fadushin and others added 20 commits March 2, 2017 11:02
Merge latest TS 1.6.x back to develop
Make the code version-agnostic: only require newly introduced keys and capabilities to be registered when stepping to the next release in the up/downgrade cycle.
…any_recent_versions

updowngrade: better prepare riak.conf & caps, make it work for 1.4-1.5 and 1.5-1.6 cycles
This test was part of the original kv679 suite, but at the time of 2.1
it was put in its own branch. Riak Test is made up of passing tests,
and this test (kv679_dataloss_fb.erl) still fails. Adding this back to a
branch off develop in preparation for a fix of this issue.
This removes the race condition where some nodes are upgrading and others aren't in the middle of an active fullsync. We are not testing AAE upgrades with this test, so we should disable them.
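A minimal sketch of what disabling AAE for this kind of upgrade test could look like in a riak_test module (the rt helper call and exact settings are assumptions, not taken from this change):

    %% Hypothetical setup: turn active anti-entropy off so AAE exchanges
    %% cannot race with nodes that are mid-upgrade during fullsync.
    Config = [{riak_kv, [{anti_entropy, {off, []}}]}],
    Nodes = rt:build_cluster(4, Config),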
… expiry interval, in an attempt to get AAE to converge more regularly.
Increased concurrency and lowered sweep interval [JIRA: RIAK-3389]
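A hedged sketch of the kind of AAE tuning described above (the option names follow riak_kv's anti-entropy settings; the values are illustrative only):

    %% Hypothetical config: make AAE hashtrees expire and exchange more
    %% often so the cluster converges within the test's timeframe.
    AAEConfig = [{riak_kv, [{anti_entropy, {on, []}},
                            {anti_entropy_concurrency, 8},     %% more simultaneous exchanges
                            {anti_entropy_expire, 60 * 1000},  %% short hashtree expiry (ms)
                            {anti_entropy_tick, 5 * 1000}]}],  %% shorter manager tick (ms)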
This test shows a dataloss edge case in riak, even in 2.1 with
per-key-actor-epochs enabled. The test is a little convoluted, and is
based on a quickcheck counter example, included in the riak_kv/test
directory. In short, both this test, and the other kv679_dataloss_fb
test, show that even with multiple replicas acking/storing, a single
disk error on a single replica is enough to cause acked writes to be
silently and permanently lost. For a replicated database, that is bad.
The partition repair test deletes all the data at a partition, and then
repairs it from neighbouring partitions. The subset of repaired data
that was originally coordinated by the deleted partition's vnode showed
up as `notfound` since the latest kv679 changes here
basho/riak_kv#1643. The reason is that the fix
in the KV repo adds a new actor to the repaired key's vclock. Prior to
this change `verify` in partition_repair.erl did a simple equality check
on the binary encoded riak_object values. This change takes into account
that a new actor may be in the vclock at the repaired vnode, and uses a
semantic equality check based on riak_object merge and riak_object
equal.
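A minimal sketch of what such a semantic comparison could look like (the helper name is hypothetical; it assumes riak_object:reconcile/2 and riak_object:equal/2 behave as the commit message describes):

    %% Hypothetical helper: rather than comparing binary-encoded objects,
    %% merge the expected object into the repaired one and check that the
    %% merge adds nothing new, so an extra repair actor in the vclock does
    %% not fail verification.
    semantically_equal(ExpectedObj, RepairedObj) ->
        Merged = riak_object:reconcile([ExpectedObj, RepairedObj], true),
        riak_object:equal(Merged, RepairedObj).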
Re-add old kv679 test and add new test for further dataloss edge case

hazen commented Apr 5, 2017

I read this as from s3 to develop and got nervous for a minute
